A BASILar Approach for Building Web APIs on top of SPARQL Endpoints
The heterogeneity of methods and technologies for publishing open data is still an obstacle to developing distributed systems on the Web. On the one hand, Web APIs, the most popular approach to offering data services, implement REST principles, which focus on loose coupling and interoperability. On the other hand, Linked Data, available through SPARQL endpoints, focuses on data integration between distributed data sources. This paper proposes BASIL, an approach to building Web APIs on top of SPARQL endpoints, in order to combine the advantages of both Web APIs and Linked Data. Compared to similar solutions, BASIL aims at minimising the learning curve for users in order to promote its adoption. The main feature of BASIL is a simple API that does not introduce new specifications, formalisms or technologies for users from either the Web API or the Linked Data community.
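The core mechanic of such an approach can be sketched as template substitution: a stored SPARQL query with specially named variables becomes a Web API whose query-string parameters fill those variables. A minimal Python sketch, assuming a simplified version of BASIL's `?_name` parameter-naming convention (the function and template below are illustrative, not BASIL's actual code):

```python
import re

def bind_parameters(query_template, params):
    """Substitute Web API parameters into a SPARQL query template.

    Variables written as ?_name are treated as required API parameters
    (a simplified version of BASIL's naming convention); plain ?name
    variables are left for the SPARQL endpoint to solve.
    """
    def replace(match):
        name = match.group(1)
        if name not in params:
            raise ValueError(f"missing required parameter: {name}")
        # Bind as a plain literal; a full implementation would also
        # handle IRIs and typed literals.
        escaped = params[name].replace('"', '\\"')
        return f'"{escaped}"'
    return re.sub(r"\?_(\w+)", replace, query_template)

template = "SELECT ?email WHERE { ?person :name ?_name ; :email ?email }"
print(bind_parameters(template, {"name": "Alice"}))
```

An API layer then only has to map `GET /api/people?name=Alice` onto this substitution and forward the bound query to the endpoint, which is what keeps the learning curve low: users write plain SPARQL plus one naming convention.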
BASIL: A Cloud Platform for Sharing and Reusing SPARQL Queries as Web APIs
One of the reasons why Web APIs are used to consume open data more often than SPARQL endpoints is the expertise required by the query language. A tool for sharing and reusing existing real-world queries could therefore help developers adopt Linked Data. We propose BASIL, a cloud platform that supports sharing and reusing SPARQL queries. In BASIL, uploaded queries generate Web APIs that applications can call instead of embedding calls to SPARQL endpoints, which facilitates query maintenance and evolution. Compared to similar solutions, BASIL aims at minimising the learning curve for users in order to promote its adoption. BASIL is a simple platform that does not introduce new specifications, formalisms or technologies for users from either the Web API or the Linked Data community.
Survival and divergence in a small group: The extraordinary genomic history of the endangered Apennine brown bear stragglers
About 100 km east of Rome, in the central Apennine Mountains, a critically endangered population of ∼50 brown bears lives in complete isolation. Mating outside this population is prevented by several hundred kilometres of bear-free territory. We exploited this natural experiment to better understand the genetic and genomic consequences of surviving at extremely small population size. We found that brown bear populations in Europe have lost connectivity since Neolithic times, when farming communities expanded and forest burning was used for land clearance. In central Italy, this resulted in a 40-fold population decline. The overall genomic impact of this decline included the complete loss of variation in the mitochondrial genome and along long stretches of the nuclear genome. Several private and deleterious amino acid changes were fixed by random drift; predicted effects include energy deficit, muscle weakness, anomalies in cranial and skeletal development, and reduced aggressiveness. Despite this extreme loss of diversity, Apennine bear genomes show nonrandom peaks of high variation, possibly maintained by balancing selection, at genomic regions significantly enriched for genes associated with the immune and olfactory systems. Challenging the paradigm of increased extinction risk in small populations, we suggest that random fixation of deleterious alleles (i) can be an important driver of divergence in isolation, (ii) can be tolerated when balancing selection prevents random loss of variation at important genes, and (iii) is followed by, or results directly in, favorable behavioral changes.
Additional co-authors: Claudio Groff, Ladislav Paule, Leonardo Gentile, Carles Vilà, Saverio Vicario, Luigi Boitani, Ludovic Orlando, Silvia Fuselli, Cristiano Vernesi, Beth Shapiro, Paolo Ciucci, and Giorgio Bertorell
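The drift mechanism invoked above can be illustrated with a toy Wright-Fisher simulation: in a population this small, even a mildly deleterious allele is sometimes carried to fixation by sampling noise alone. A hedged sketch (parameter values are illustrative, not estimates from the study):

```python
import random

def wright_fisher(pop_size, freq, s=0.0, generations=1000, seed=1):
    """Simulate allele frequency change under Wright-Fisher drift.

    s < 0 makes the allele deleterious; in a small population, drift
    (the binomial sampling step) can still carry it to fixation, as the
    abstract describes for the Apennine bears. This is a toy model, not
    the authors' analysis.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        # Selection shifts the sampling probability; drift is the
        # finite-sample draw of 2N gene copies itself.
        p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
        freq = sum(rng.random() < p for _ in range(2 * pop_size)) / (2 * pop_size)
        if freq in (0.0, 1.0):
            break
    return freq

# Count fixations of a slightly deleterious allele in a tiny population.
fixed = sum(
    wright_fisher(pop_size=20, freq=0.1, s=-0.01, seed=k) == 1.0
    for k in range(100)
)
print(f"deleterious allele fixed in {fixed}/100 replicates")
```

With an effective size this small, selection coefficients of order 1/(2N) are nearly invisible to selection, which is why fixation of private deleterious changes is the expected outcome rather than a paradox.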
Flash flood forecasting using Data-Based Mechanistic models and radar rainfall forecasts
The parsimonious time series models used within the Data-Based Mechanistic (DBM) modelling framework have been shown to provide reliable, accurate forecasts in many hydrological situations. In this work, the DBM methodology is applied to forecast discharge during a flash flood in a small Alpine catchment. In contrast to catchments studied in previous work, this catchment responds rapidly to rainfall. It is demonstrated, by example, that coupling a radar-derived ensemble quantitative precipitation forecast to a DBM model allows the forecast horizon to be extended to a level useful for emergency response. A treatment of the predictive uncertainty in the resulting hydrological forecasts is discussed and illustrated.
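The kind of parsimonious model the DBM framework identifies can be sketched as a first-order discrete-time transfer function relating delayed rainfall to discharge. A minimal illustration with made-up coefficients, not values identified from the catchment in the paper:

```python
def forecast(discharge, rainfall, a=0.9, b=0.05, delay=2, horizon=4):
    """k-step-ahead discharge forecast from a first-order transfer
    function q[t] = a*q[t-1] + b*r[t-delay], the kind of parsimonious
    model used in DBM work. Coefficients here are illustrative; in
    practice they are identified from data, and `rainfall` would extend
    past the last observation with a radar rainfall forecast.
    """
    q = discharge[-1]
    preds = []
    for k in range(1, horizon + 1):
        t = len(discharge) + k - 1          # time index being predicted
        r = rainfall[t - delay] if 0 <= t - delay < len(rainfall) else 0.0
        q = a * q + b * r                   # recession plus rainfall input
        preds.append(q)
    return preds

# Observed discharge so far, plus rainfall including forecast values.
print(forecast([3.2, 3.0, 2.9], [0.0, 0.0, 12.0, 8.0, 4.0, 0.0]))
```

Running this once per member of a radar rainfall ensemble yields an ensemble of discharge forecasts, which is essentially how the ensemble chain described above extends the forecast horizon.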
The potential of radar-based ensemble forecasts for flash-flood early warning in the southern Swiss Alps
This study explores the limits of radar-based forecasting for hydrological runoff prediction. Two novel radar-based ensemble forecasting chains for flash-flood early warning are investigated in three catchments in the southern Swiss Alps and compared with deterministic discharge forecasts for the same catchments. The first radar-based ensemble forecasting chain is driven by NORA (Nowcasting of Orographic Rainfall by means of Analogues), an analogue-based heuristic nowcasting system that predicts orographic rainfall for the following eight hours. The second ensemble forecasting system evaluated is REAL-C2, in which the numerical weather prediction model COSMO-2 is initialised with 25 different initial conditions derived from a four-day nowcast with the radar ensemble REAL. Additionally, three deterministic forecasting chains were analysed. The performance of these five flash-flood forecasting systems was analysed for the 1389 h between June 2007 and December 2010 for which NORA forecasts were issued owing to the presence of orographic forcing.
A clear preference was found for the ensemble approach. Discharge forecasts perform better when forced by NORA or REAL-C2 rather than by deterministic weather radar data. Moreover, using an ensemble of initial conditions at forecast initialisation, as in REAL-C2, significantly improved forecast skill. These forecasts also perform better than forecasts forced by ensemble rainfall forecasts (NORA) initialised from a single initial condition of the hydrological model. Thus the best results were obtained with the REAL-C2 forecasting chain. However, for regions where REAL cannot be produced, NORA might be an option for forecasting events triggered by orographic precipitation.
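One standard way to put ensemble and deterministic chains on a common scale is a proper scoring rule such as the continuous ranked probability score (CRPS), which reduces to absolute error for a single-member forecast. A sketch of the empirical estimator (a standard verification score, not necessarily the exact metric used in the paper):

```python
def crps(ensemble, obs):
    """Empirical continuous ranked probability score; lower is better.

    term1 rewards members close to the observation; term2 accounts for
    ensemble spread. For a one-member "ensemble" (a deterministic
    forecast) the score collapses to the absolute error |x - obs|,
    so ensemble and deterministic chains are directly comparable.
    """
    m = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / m
    term2 = sum(abs(x - y) for x in ensemble for y in ensemble) / (2 * m * m)
    return term1 - term2

observed = 1.0
print(crps([0.5, 0.9, 1.4, 2.0], observed))   # ensemble forecast
print(crps([1.6], observed))                  # deterministic forecast
```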
Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach
BACKGROUND: The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow CDSS reuse by encapsulating them in Web services. However, strong barriers to sharing CDS functionality remain as a consequence of the limited expressiveness of service interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web service technologies.
OBJECTIVE: To facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data.
MATERIALS AND METHODS: We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer, based on machine-interpretable ontologies, which powers their interoperability and reuse. These ontologies provide unambiguous descriptions of CDS service properties to expose them on the Web of Data.
RESULTS: We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The Web service definitions were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built.
DISCUSSION: Linked Services can be used effectively to expose CDS services on the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services also allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained by different organizations.
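The kind of machine-interpretable service description the paper argues for can be sketched as a few RDF triples in Turtle, reusing Dublin Core for metadata and a SNOMED CT concept IRI for the clinical input. The `ex:` property names below are illustrative placeholders, not the ontology actually used in the paper:

```python
def service_description(service_iri, title, creator, input_code):
    """Emit a minimal Turtle description of a CDS service.

    Binding properties to shared vocabularies (Dublin Core, SNOMED CT)
    is what lets a client discover the service by reasoning over the
    description rather than reading prose documentation.
    """
    return "\n".join([
        "@prefix dc:  <http://purl.org/dc/terms/> .",
        "@prefix ex:  <http://example.org/cds#> .",
        "@prefix sct: <http://snomed.info/id/> .",
        "",
        f"<{service_iri}> a ex:CDSService ;",
        f'    dc:title "{title}" ;',
        f'    dc:creator "{creator}" ;',
        f"    ex:hasInputConcept sct:{input_code} .",
    ])

# 73211009 is the SNOMED CT concept used here as an example clinical input.
print(service_description(
    "http://example.org/services/diabetes-risk",
    "Diabetes risk assessment", "Example Hospital", "73211009"))
```

A registry can then answer queries like "find all services whose input concept is subsumed by a given SNOMED CT concept" with a SPARQL query plus subsumption reasoning, which is the discovery scenario the abstract describes.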
Capturing the currency of DBpedia descriptions and get insight into their validity
An increasing amount of data is published and consumed on the Web according to the Linked Open Data (LOD) paradigm. In such a scenario, capturing the age of data can provide insight into its validity, under the hypothesis that the more up-to-date data is, the more likely it is to be true. In this paper we present a model and a framework for assessing the currency of the data represented in one of the most important LOD datasets, DBpedia. Existing currency metrics are based on the notion of date of last modification, but such information is often not explicitly provided by data producers. The proposed framework extrapolates this temporal metadata from the time-labelled revisions of the Wikipedia pages from which the data has been extracted. Experimental results demonstrate the usefulness of the framework and the effectiveness of the currency evaluation model in providing a reliable indicator of the validity of facts represented in DBpedia.
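Once a last-modification date has been recovered from the Wikipedia revision history, a currency score can be computed from the data's age. The linear-decay formulation below is one common way to express freshness and is only a sketch; the paper's own currency model may be defined differently:

```python
from datetime import datetime

def currency(last_modified, observed, volatility_days=365.0):
    """Currency score in [0, 1]: 1 for just-updated data, decaying
    linearly to 0 once the data is older than the assumed volatility
    window. The last_modified timestamp would come from time-labelled
    Wikipedia revisions, as in the framework described above; the
    volatility window is an assumed parameter, not a value from the paper.
    """
    age_days = (observed - last_modified).total_seconds() / 86400.0
    return max(0.0, 1.0 - age_days / volatility_days)

# A fact last touched six months ago, inspected mid-2014.
print(currency(datetime(2014, 1, 1), datetime(2014, 7, 2)))
```

Scores near 1 then support the hypothesis in the abstract that the fact is still likely to be true, while scores near 0 flag it for re-validation.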